Two SVDs produce more focal deep learning representations
Authors
Abstract
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality – a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by (Anandkumar et al., 2012).

In this paper, we propose to generate representations for deep learning by two consecutive applications of singular value decomposition (SVD). In a setup inspired by (Anandkumar et al., 2012), the first SVD is intended for denoising. The second SVD rotates the representation to increase what we call focality. In this initial study, we do not evaluate the representations in an application. Instead, we employ diagnostic measures that may be useful in their own right to evaluate the quality of representations independent of an application. We use the following terminology: 1SVD (resp. 2SVD) refers to the method using one (resp. two) applications of SVD; 1LAYER (resp. 2LAYER) corresponds to a single-hidden-layer (resp. two-hidden-layer) architecture. In Section 1, we introduce the two methods 1SVD and 2SVD and show that 2SVD generates better (in a sense to be defined below) representations than 1SVD. In Section 2, we compare 1LAYER and 2LAYER 2SVD representations and show that 2LAYER representations are better. Section 3 discusses the results. We present our conclusions in Section 4.
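The abstract describes the two-SVD construction only in prose, so the following is a minimal sketch of what two consecutive SVDs could look like in code. The function name two_svd_representation, the use of numpy, and the target dimensionality k are illustrative assumptions rather than the authors' implementation; the paper's actual denoising and rotation steps may differ in detail.

    # Minimal sketch (assumption: numpy; X is a feature matrix with one row per input item).
    import numpy as np

    def two_svd_representation(X, k):
        # First SVD: keep only the top-k singular dimensions (denoising step).
        U1, S1, _ = np.linalg.svd(X, full_matrices=False)
        Z = U1[:, :k] * S1[:k]              # rank-k representation of the rows of X
        # Second SVD: re-decompose the reduced matrix and express it in the
        # rotated basis given by its right singular vectors.
        _, _, Vt2 = np.linalg.svd(Z, full_matrices=False)
        return Z @ Vt2.T

    # Illustration only: random data standing in for a real input matrix.
    X = np.random.rand(200, 50)
    R = two_svd_representation(X, k=10)
    print(R.shape)                          # (200, 10)

Under this sketch, the first SVD discards low-variance directions, while the second SVD only rotates the reduced space, so the dimensionality of the representation is unchanged by the second step.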
Similar resources
Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning
Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in different domains. In real-world applications, there is a huge amount of data, but most of it is unlabeled. Domain adaptation is effective for image classification, where obtaining adequate labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...
Learning Multi-channel Deep Feature Representations for Face Recognition
Deep learning provides a natural way to obtain feature representations from data without relying on hand-crafted descriptors. In this paper, we propose to learn deep feature representations using unsupervised and supervised learning in a cascaded fashion to produce generically descriptive yet class-specific features. The proposed method can take full advantage of the availability of large-scale...
New Techniques in Deep Representation Learning
New Techniques in Deep Representation Learning. Galen Andrew. Chair of the Supervisory Committee: Associate Professor Emanuel Todorov (CSE, joint with AMATH). The choice of feature representation can have a large impact on the success of a machine learning algorithm at solving a given problem. Although human engineers employing task-specific domain knowledge still play a key role in feature engineeri...
Learning Deep Representations for Scene Labeling with Semantic Context Guided Supervision
Scene labeling is a challenging classification problem where each input image requires a pixel-level prediction map. Recently, deep-learning-based methods have shown their effectiveness on solving this problem. However, we argue that the large intra-class variation provides ambiguous training information and hinders the deep models’ ability to learn more discriminative deep feature representati...
Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although domain knowledge can be used to help design representations, learning can also be used, and the quest for AI is motivating the design of m...
Journal: CoRR
Volume: abs/1301.3627
Pages: -
Year of publication: 2013